Global Linear Convergence of an Augmented Lagrangian Algorithm to Solve Convex Quadratic Optimization Problems
Authors
Abstract
We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the proximal interpretation of the algorithm and of the global radial Lipschitz continuity of the reciprocal of the dual function subdifferential. This Lipschitz property is itself obtained by means of a lemma of general interest, which compares the distances from a point in the positive orthant to an affine space, on the one hand, and to the polyhedron given by the intersection of this affine space and the positive orthant, on the other hand. No strict complementarity assumption is needed. The result is illustrated by numerical experiments and algorithmic implications, including complexity issues, are discussed.
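As a concrete illustration (not taken from the paper), the sketch below runs a classical augmented Lagrangian (method of multipliers) iteration on a convex quadratic program with linear inequality constraints and prints the constraint violation at each outer iteration, the quantity whose global linear decay is the subject of the paper. The problem data, the augmentation parameter r, and the use of an L-BFGS inner solver are illustrative assumptions; the paper's exact problem format and inner solver may differ.

```python
# A minimal sketch, not the paper's exact implementation: a classical
# augmented Lagrangian / method-of-multipliers iteration for
#     minimize 0.5 x'Qx + q'x   subject to   C x <= d,
# tracking the constraint violation, which the paper shows converges to zero
# globally linearly once the augmentation parameter r is large enough.
# Q, q, C, d, r and the L-BFGS inner solver are illustrative choices.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_qp(Q, q, C, d, r=10.0, outer_iters=25):
    x = np.zeros(Q.shape[0])
    lam = np.zeros(C.shape[0])           # multipliers for C x <= d, kept >= 0

    def al_value_and_grad(z, lam):
        # L_r(z, lam) = f(z) + (1/(2r)) * (||max(0, lam + r(Cz - d))||^2 - ||lam||^2)
        s = np.maximum(0.0, lam + r * (C @ z - d))
        val = 0.5 * z @ Q @ z + q @ z + (s @ s - lam @ lam) / (2.0 * r)
        grad = Q @ z + q + C.T @ s        # L_r is continuously differentiable in z
        return val, grad

    for k in range(outer_iters):
        # primal step: minimize the augmented Lagrangian in x for fixed multipliers
        res = minimize(al_value_and_grad, x, args=(lam,), jac=True, method="L-BFGS-B")
        x = res.x
        # dual step: first-order multiplier update, projected onto the positive orthant
        lam = np.maximum(0.0, lam + r * (C @ x - d))
        violation = np.linalg.norm(np.maximum(0.0, C @ x - d))
        print(f"outer iteration {k:2d}: constraint violation = {violation:.3e}")
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    Q = B.T @ B + np.eye(5)               # positive definite -> convex quadratic
    q = rng.standard_normal(5)
    C = rng.standard_normal((8, 5))       # eight linear inequality constraints
    d = rng.standard_normal(8)
    augmented_lagrangian_qp(Q, q, C, d, r=10.0)
```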
Similar resources
Augmented Lagrangian method for solving absolute value equation and its application in two-point boundary value problems
One of the most important topics studied by researchers in recent years is the absolute value equation (AVE). The absolute value equation appears to be a useful tool in optimization, since it subsumes the linear complementarity problem and thus also linear programming and convex quadratic programming. This paper introduces a new method for solving the absolute value equation. To do this, we transform a...
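For orientation, the absolute value equation asks for x with Ax − |x| = b, where |x| is taken componentwise. The sketch below solves a small synthetic instance with a generalized Newton iteration, a standard AVE method and not the augmented Lagrangian approach proposed in the paper above; the data and parameters are illustrative.

```python
# Illustrative sketch only: the absolute value equation (AVE) asks for x with
#     A x - |x| = b,   |x| taken componentwise.
# A small random instance is solved with the generalized Newton iteration
#     x_{k+1} = (A - D(x_k))^{-1} b,   D(x) = diag(sign(x)),
# which is a standard AVE method, not the augmented Lagrangian method of the
# cited paper.
import numpy as np

def generalized_newton_ave(A, b, max_iters=50, tol=1e-10):
    x = np.linalg.solve(A, b)                  # start from the solution of A x = b
    for _ in range(max_iters):
        D = np.diag(np.sign(x))                # a subgradient of |x|
        x = np.linalg.solve(A - D, b)          # Newton-type step
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            break
    return x

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 4.0 * n * np.eye(n)   # singular values > 1, so the AVE is uniquely solvable
b = rng.standard_normal(n)
x = generalized_newton_ave(A, b)
print("AVE residual:", np.linalg.norm(A @ x - np.abs(x) - b))
```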
An efficient linearly convergent semismooth Newton-CG augmented Lagrangian method for Lasso problems
We develop a fast and robust algorithm for solving large-scale convex composite optimization models, with an emphasis on the ℓ1-regularized least-squares regression (Lasso) problem. Although a large number of solvers for Lasso problems exist in the literature, so far no solver can handle difficult real-world large-scale regression problems. By relying on the piecewise linear-quadratic struct...
First-order methods for constrained convex programming based on linearized augmented Lagrangian function
First-order methods have been popular for solving large-scale problems. However, many existing works only consider unconstrained problems or problems with simple constraints. In this paper, we develop two first-order methods for constrained convex programs whose constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic...
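As a rough illustration of the linearized augmented Lagrangian idea, restricted to the simplest case of affine equality constraints only and not the specific methods of that paper, each iteration below takes a single gradient step on the augmented Lagrangian in the primal variable and then updates the multiplier. The step sizes and the toy problem are assumptions.

```python
# Minimal sketch, equality-constrained case only: one linearized augmented
# Lagrangian iteration for  min f(x)  s.t.  A x = b  takes a single gradient
# step on L_beta(x, lam) = f(x) + lam'(Ax - b) + (beta/2)||Ax - b||^2 in x,
# then updates the multiplier.  Step sizes eta, beta and the toy problem are
# illustrative assumptions, not values from the paper.
import numpy as np

def linearized_al_step(grad_f, A, b, x, lam, beta=1.0, eta=0.1):
    g = grad_f(x) + A.T @ (lam + beta * (A @ x - b))   # gradient of L_beta in x
    x_new = x - eta * g                                # single (linearized) primal step
    lam_new = lam + beta * (A @ x_new - b)             # dual ascent step
    return x_new, lam_new

# toy example: f(x) = 0.5 ||x||^2 with one affine equality constraint x1 + x2 = 1
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = np.zeros(2), np.zeros(1)
for _ in range(500):
    x, lam = linearized_al_step(lambda z: z, A, b, x, lam)
print("x =", x, " residual =", np.linalg.norm(A @ x - b))   # x approaches (0.5, 0.5)
```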
An Augmented Lagrangian Method for a Class of LMI-Constrained Problems in Robust Control Theory
We present a new approach to a class of non-convex LMI-constrained problems in robust control theory. The problems we consider may be recast as the minimization of a linear objective subject to linear matrix inequality (LMI) constraints in tandem with non-convex constraints related to rank conditions. We solve these problems using an extension of the augmented Lagrangian technique. The Lagrangia...
Publication date: 2005